Distributed Object Replication in a Cluster of Workstations
Abstract
This article is concerned mainly with the software aspects of building reliable and efficient services on a cluster of workstations. A key technology for achieving this goal is service replication. However, designing and implementing a replication system is a difficult task. Based on an active replication model, this paper presents an object-oriented design pattern that simplifies the design and implementation of distributed replication.
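Since the abstract stops at the pattern itself, a minimal sketch may help fix the core idea of active replication: every replica receives every request and applies it deterministically, so all copies evolve through the same sequence of states. The Java below simulates the required totally-ordered broadcast with a single in-process loop; the class and method names (Command, ReplicaGroup, CounterReplica) are illustrative and are not the paper's actual API.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// A deterministic state change that every replica applies identically.
interface Command {
    void applyTo(CounterReplica replica);
}

class IncrementCommand implements Command {
    private final int amount;
    IncrementCommand(int amount) { this.amount = amount; }
    public void applyTo(CounterReplica replica) { replica.add(amount); }
}

// One copy of the replicated object's state.
class CounterReplica {
    private long value = 0;
    void add(int amount) { value += amount; }
    long value() { return value; }
}

// Stands in for a group-communication layer that delivers every command
// to all replicas in the same total order (on a real COW, an atomic
// broadcast protocol would provide this guarantee).
class ReplicaGroup {
    private final List<CounterReplica> replicas = new CopyOnWriteArrayList<>();
    void join(CounterReplica r) { replicas.add(r); }
    void broadcast(Command cmd) {
        for (CounterReplica r : replicas) {
            cmd.applyTo(r);  // same command, same order, on every replica
        }
    }
}

public class ActiveReplicationDemo {
    public static void main(String[] args) {
        ReplicaGroup group = new ReplicaGroup();
        CounterReplica a = new CounterReplica();
        CounterReplica b = new CounterReplica();
        group.join(a);
        group.join(b);

        group.broadcast(new IncrementCommand(5));
        group.broadcast(new IncrementCommand(2));

        // Both replicas hold identical state: 7 == 7
        System.out.println(a.value() + " == " + b.value());
    }
}
```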
Similar Articles
Transparent Fault Tolerance for Parallel Applications on Networks of Workstations
This paper describes a new method for providing transparent fault tolerance for parallel applications on a network of workstations. We have designed our method in the context of a shared object system called SAM, a portable run-time system that provides a global name space and automatic caching of shared data. SAM incorporates a novel design intended to address the problem of the high communicat...
Experience of Adaptive Replication in Distributed File Systems
Replication is a key strategy for improving locality, fault tolerance and availability in distributed systems. The paper focuses on distributed file systems and presents a system to transparently manage file replication through a network of workstations sharing the same distributed file system. The system integrates an adaptive file replication policy that is capable of reacting to changes in t...
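As a rough illustration of what an "adaptive file replication policy" can mean in practice, the sketch below adjusts a file's replica count from its observed read/write mix: read-heavy files gain replicas, write-heavy files shed them, since every write must update all copies. The thresholds and names are invented for the example and are not taken from the cited system.

```java
// Suggests a replica count for one file from recent access statistics.
class AdaptiveReplicationPolicy {
    private final int minReplicas;
    private final int maxReplicas;

    AdaptiveReplicationPolicy(int minReplicas, int maxReplicas) {
        this.minReplicas = minReplicas;
        this.maxReplicas = maxReplicas;
    }

    int suggest(int current, long reads, long writes) {
        double readRatio = (double) reads / Math.max(1, reads + writes);
        if (readRatio > 0.9 && current < maxReplicas) return current + 1; // read-heavy: replicate more
        if (readRatio < 0.5 && current > minReplicas) return current - 1; // write-heavy: shed replicas
        return current;                                                   // mixed load: leave as-is
    }
}
```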
Distributed Shared Memory Management for Java
Jackal is a fine-grained distributed shared memory system that can run multithreaded Java programs on distributed-memory systems. The Jackal compiler generates an access check for every use of an object field or array element. The overhead of the access checks is reduced using compiler optimizations. The runtime system uses a home-based consistency protocol that manages (and caches) objects and a...
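To make the compiler-inserted access check concrete, the sketch below shows roughly what a generated check around a field write could look like: test whether the object is cached locally, fetch it from its home node if not, then perform the original access. This is not Jackal's actual generated code; DsmRuntime and its methods are invented for the illustration.

```java
class Point {
    int x, y;
}

class AccessCheckSketch {
    // The source-level statement `p.x = v` would compile to roughly:
    static void writeX(Point p, int v) {
        if (!DsmRuntime.isCachedLocally(p)) {  // compiler-inserted access check
            DsmRuntime.fetchFromHome(p);       // home-based protocol fetches the object
        }
        p.x = v;                               // the original access
    }
}

// Stub standing in for the DSM runtime system.
class DsmRuntime {
    static boolean isCachedLocally(Object o) { return true; }
    static void fetchFromHome(Object o) { /* request object from its home node */ }
}
```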
An Experimental Evaluation of Performance of A Hadoop Cluster on Replica Management
Hadoop is an open source implementation of the MapReduce framework in the realm of distributed processing. A Hadoop cluster is a unique type of computational cluster designed for storing and analyzing large datasets across a cluster of workstations. To handle massive-scale data, Hadoop exploits the Hadoop Distributed File System (HDFS). HDFS, like most distributed file systems, sh...
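Replica management in HDFS is exposed per file through the standard client API, so a short example can show the mechanism such an evaluation exercises. The sketch below reads and changes a file's replication factor; it assumes a reachable cluster whose configuration is on the classpath, and the path /data/input.txt is hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicaFactorDemo {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/data/input.txt");  // hypothetical file
            short current = fs.getFileStatus(file).getReplication();
            System.out.println("current replication factor: " + current);
            // Ask the NameNode to re-replicate this file with 5 copies.
            fs.setReplication(file, (short) 5);
        }
    }
}
```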
The design and implementation of an active replication scheme for distributing services in a cluster of workstations
Replication is the key to providing high availability, fault tolerance, and enhanced performance in a cluster of workstations (COWs). However, building such a system remains a difficult and challenging task, mainly due to the difficulty of maintaining data consistency among replicas and the lack of easy and efficient tools supporting the development procedure. In this paper we propose an active re...